Achieving Linear Speedup in Parallel LRU Cache Simulation

Author

  • Tobias Kiesling
Abstract

Previous work on the simulation of LRU caching led to parallel algorithms that are efficient for small numbers of processors. However, these algorithms exhibit sub-linear speedup, with efficiency decreasing sharply as the number of processors grows. To achieve linear speedup, this work proposes combining approximation techniques with the existing parallelization approaches. The structure of the approximate algorithm allows direct control over the introduced error, which can be exploited to obtain reasonable speedup with minimal error.
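The serial computation being parallelized here is trace-driven simulation of a fully associative LRU cache: replaying a reference trace and counting hits and misses. A minimal sketch of that baseline (an illustration under stated assumptions, not the paper's parallel or approximate algorithm; the function name and trace are hypothetical):

```python
from collections import OrderedDict

def simulate_lru(trace, capacity):
    """Count hits and misses for a fully associative LRU cache
    replaying an address trace. Illustrative baseline only."""
    cache = OrderedDict()  # insertion order tracks recency: front = LRU
    hits = misses = 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)  # mark as most recently used
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[addr] = None
    return hits, misses

# Example: a short trace on a 3-entry cache.
print(simulate_lru([1, 2, 3, 1, 4, 2], capacity=3))  # (1, 5)
```

A parallel simulator partitions the trace across processors; because each block's LRU state depends on the entire prefix of the trace, the sub-trace results must be reconciled, which is where the sub-linear speedup (and the paper's approximation) comes in.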


Related articles

Reduction in Cache Memory Power Consumption based on Replacement Quantity

Today power consumption is considered to be one of the important issues. Therefore, its reduction plays a considerable role in developing systems. Previous studies have shown that approximately 50% of total power consumption is used in cache memories. There is a direct relationship between power consumption and replacement quantity made in cache. The less the number of replacements is, the less...

Full text


Hawkeye: Leveraging Belady’s Algorithm for Improved Cache Replacement

This paper evaluates the Hawkeye cache replacement policy on the Cache Replacement Championship framework. The solution departs from that of the original paper by distinguishing prefetches from demand fetches, so that redundant prefetches can be identified and cached appropriately. Evaluation on SPEC2006 shows that in the absence of prefetching, Hawkeye provides a speedup of 4.5% over LRU (vs. ...
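Hawkeye's training signal is Belady's MIN policy, which on a miss evicts the block whose next reference lies farthest in the future; it is computable offline from a recorded trace. A minimal sketch (illustrative only, not the Hawkeye implementation; names are hypothetical):

```python
def simulate_belady(trace, capacity):
    """Belady's MIN on a recorded trace: evict the cached block
    whose next use is farthest in the future. Illustrative only."""
    # Precompute, for each position, the next occurrence of its address.
    next_use = [float('inf')] * len(trace)
    last_seen = {}
    for i in range(len(trace) - 1, -1, -1):
        next_use[i] = last_seen.get(trace[i], float('inf'))
        last_seen[trace[i]] = i

    cache = {}  # addr -> position of its next use
    hits = misses = 0
    for i, addr in enumerate(trace):
        if addr in cache:
            hits += 1
        else:
            misses += 1
            if len(cache) >= capacity:
                victim = max(cache, key=cache.get)  # farthest next use
                del cache[victim]
        cache[addr] = next_use[i]
    return hits, misses

print(simulate_belady([1, 2, 3, 1, 2, 4, 1, 2], capacity=2))  # (2, 6)
```

On the same trace, LRU would fare no better, since MIN is provably optimal for a given cache size; policies like Hawkeye try to learn, online, which blocks MIN would have kept.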

Full text

Empirical Study of Parallel LRU Simulation Algorithms

This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its ...

Full text

Cache Replacement Policies for Improving LLC Performance in Multi-Core Processors

Poor cache memory management can have an adverse impact on overall system performance. In a Chip Multi-Processor (CMP) scenario, this effect is amplified, since every core has a private cache in addition to a larger shared cache. The replacement policy plays a key role in managing cache data, so it needs to be extremely efficient in order to extract the maximum potential of the cache memory. Over the years ...

Full text


Journal:

Volume   Issue 

Pages  -

Publication date: 2004